Enhancements to the Sequence-to-Sequence-Based Natural Answer Generation Models
Authors
Abstract
Similar Resources
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation
Sequence-to-sequence models have shown strong performance across a broad range of applications. However, their application to parsing and generating text using Abstract Meaning Representation (AMR) has been limited, due to the relatively limited amount of labeled data and the nonsequential nature of the AMR graphs. We present a novel training procedure that can lift this limitation using millio...
Unlabeled Data for Morphological Generation With Character-Based Sequence-to-Sequence Models
We present a semi-supervised way of training a character-based encoder-decoder recurrent neural network for morphological reinflection, the task of generating one inflected word form from another. This is achieved by using unlabeled tokens or random strings as training data for an autoencoding task, adapting the network for morphological reinflection, and performing multi-task training. We thus use...
On a sequence related to the coprime integers
The asymptotic behaviour of the sequence with general term $P_n=(\varphi(1)+\varphi(2)+\cdots+\varphi(n))/(1+2+\cdots+n)$, which appears in the study of coprime integers, is examined, and an explicit bound for the difference $P_n-6/\pi^2$ is found.
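The limit stated in this abstract is easy to check numerically: the partial sums of Euler's totient function $\varphi$ grow like $3n^2/\pi^2$, so the ratio $P_n$ tends to $6/\pi^2 \approx 0.6079$. A minimal sketch (the naive gcd-based totient below is an illustrative choice, not the paper's method):

```python
from math import gcd, pi

def phi(n):
    # Euler's totient: count of 1 <= k <= n coprime to n
    # (naive O(n log n) version, fine for small n)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def P(n):
    # P_n = (phi(1) + phi(2) + ... + phi(n)) / (1 + 2 + ... + n)
    return sum(phi(k) for k in range(1, n + 1)) / (n * (n + 1) // 2)

# P(1000) is already within about 2e-4 of the limit 6/pi**2 ~ 0.6079
print(P(1000), 6 / pi**2)
```

The paper's contribution is an explicit bound on $P_n - 6/\pi^2$; the sketch above only illustrates the convergence itself.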
Variational Attention for Sequence-to-Sequence Models
The variational encoder-decoder (VED) encodes source information as a set of random variables using a neural network, which in turn is decoded into target data using another neural network. In natural language processing, sequence-to-sequence (Seq2Seq) models typically serve as encoder-decoder networks. When combined with a traditional (deterministic) attention mechanism, the variational latent ...
Author Masking using Sequence-to-Sequence Models
The paper describes the approach adopted for the Author Masking Task at PAN 2017. To mask the original author, we use a combination of methods based either on a deep-learning approach or on traditional obfuscation methods. We obtain a sample of obfuscated sentences from the original one and choose the best of them using a language model. We try to change both the content and length of the origin...
Journal
Journal title: IEEE Access
Year: 2020
ISSN: 2169-3536
DOI: 10.1109/access.2020.2978551